Oral food challenges (OFCs) are essential for accurately diagnosing food allergy. However, patients are reluctant to undergo OFCs, and for those who do, access to allergists in rural/community healthcare settings is limited. Predicting OFC outcomes through machine learning could facilitate the elimination of food allergens at home, improve patient and physician comfort during OFCs, and conserve medical resources by minimizing the number of OFCs performed. Clinical data were collected from 1,112 patients who collectively underwent 1,284 OFCs, covering clinical factors including serum specific IgE, total IgE, skin prick tests (SPTs), symptoms, sex, and age. Using these clinical features, machine learning models were built to predict the outcomes of peanut, egg, and milk challenges. The best-performing model for each allergen was created with the Learning Using Concave and Convex Kernels (LUCCK) method, which achieved areas under the curve (AUCs) of 0.76, 0.68, and 0.70 for peanut, egg, and milk OFC prediction, respectively. Model interpretation via SHapley Additive exPlanations (SHAP) indicated that specific IgE, together with SPT wheal and flare values, was highly predictive of OFC outcomes. The results of this analysis suggest that machine learning has the potential to predict OFC outcomes, and they reveal relevant clinical factors for further study.
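The pipeline the abstract describes (clinical features in, challenge outcome out, evaluated by AUC) can be sketched as follows. This is a minimal illustration, not the paper's method: the LUCCK algorithm is not available in standard libraries, so logistic regression stands in, and the features are synthetic stand-ins named after those the abstract lists (specific IgE, total IgE, SPT wheal, age, sex), not real patient data.

```python
# Hypothetical sketch of predicting OFC outcomes from clinical features.
# Logistic regression is a stand-in for LUCCK; all data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
specific_ige = rng.lognormal(1.0, 1.0, n)   # kU/L, synthetic
total_ige = rng.lognormal(3.0, 1.0, n)      # kU/L, synthetic
spt_wheal = rng.gamma(2.0, 2.0, n)          # mm, synthetic
age = rng.uniform(1, 18, n)                 # years
sex = rng.integers(0, 2, n)
X = np.column_stack([specific_ige, total_ige, spt_wheal, age, sex])

# Synthetic outcome loosely driven by specific IgE and wheal size,
# mirroring the features SHAP flagged as most predictive.
logit = 0.4 * np.log(specific_ige) + 0.2 * spt_wheal - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

In the paper, SHAP values computed on the fitted model would then attribute each prediction to individual clinical features.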
Model interpretability is essential for many practical applications, such as clinical decision support systems. In this paper, a novel interpretable machine learning method is proposed that can model the relationship between input variables and responses in the form of human-understandable rules. The method is built by applying tropical geometry to fuzzy inference systems, in which variable encoding functions and salient rules can be discovered through supervised learning. Experiments on synthetic datasets were conducted to investigate the performance and capacity of the proposed algorithm in classification and rule discovery. The proposed method was then applied to a clinical task: identifying heart failure patients who would benefit from advanced therapies such as heart transplantation or durable mechanical circulatory support. Experimental results show that the proposed network achieves strong performance on the classification tasks. In addition to learning human-understandable rules from data, existing fuzzy domain knowledge can be easily transferred into the network and used to facilitate model training. Our results show that the proposed model, with its ability to incorporate existing domain knowledge, can significantly improve model generalizability. These characteristics make the proposed network promising for applications that require model reliability and justification.
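The tropical-geometry connection rests on the max-plus semiring, in which "addition" is max and "multiplication" is ordinary addition, so tropical polynomials are piecewise-linear functions. That piecewise-linear structure is what relates tropical algebra to the min/max operations of fuzzy rule inference. A minimal illustration (not the paper's network):

```python
# The max-plus tropical semiring: tropical addition is max,
# tropical multiplication is ordinary +.
def trop_add(a, b):
    return max(a, b)

def trop_mul(a, b):
    return a + b

def trop_poly(x, coeffs):
    """Evaluate the tropical polynomial max_i (c_i + i*x),
    a convex piecewise-linear function of x."""
    return max(c + i * x for i, c in enumerate(coeffs))

# Tropical quadratic c0 (+) c1 (*) x (+) c2 (*) x^2 = max(c0, c1 + x, c2 + 2x):
print(trop_poly(0.0, [1.0, 0.0, -1.0]))  # max(1, 0, -1) = 1.0
print(trop_poly(3.0, [1.0, 0.0, -1.0]))  # max(1, 3, 5) = 5.0
```

As `x` varies, the active linear piece changes, which is the mechanism that lets rule-like max/min structure be learned with gradient-based training.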
This literature review identifies indicators that associate with higher impact or higher quality research from article text (e.g., titles, abstracts, lengths, cited references and readability) or metadata (e.g., the number of authors, international or domestic collaborations, journal impact factors and authors' h-index). This includes studies that used machine learning techniques to predict citation counts or quality scores for journal articles or conference papers. The literature review also includes evidence about the strength of association between bibliometric indicators and quality score rankings from previous UK Research Assessment Exercises (RAEs) and REFs in different subjects and years, and similar evidence from other countries (e.g., Australia and Italy). In support of this, the document also surveys studies that used public datasets of citations, social media indicators or open review texts (e.g., Dimensions, OpenCitations, Altmetric.com and Publons) to help predict the scholarly impact of articles. The results of this part of the literature review were used to inform the experiments using machine learning to predict REF journal article quality scores, as reported in the AI experiments report for this project. The literature review also covers technology to automate editorial processes, to provide quality control for papers and reviewers' suggestions, to match reviewers with articles, and to automatically categorise journal articles into fields. Bias and transparency in technology-assisted assessment are also discussed.
National research evaluation initiatives and incentive schemes have previously chosen between simplistic quantitative indicators and time-consuming peer review, sometimes supported by bibliometrics. Here we assess whether artificial intelligence (AI) could provide a third alternative, estimating article quality from multiple bibliometric and metadata inputs. We investigated this using provisional three-level REF2021 peer review scores for 84,966 articles submitted to the UK Research Excellence Framework 2021 that matched a Scopus record from 2014-18 and had a substantial abstract. We found that accuracy is highest in the medical and physical sciences Units of Assessment (UoAs) and economics, reaching 42% above the baseline (72% overall) in the best case. This is based on 1000 bibliometric inputs and half of the articles used for training in each UoA. Prediction accuracies above the baseline for the social science, mathematics, engineering, arts, and humanities UoAs were much lower or close to zero. The Random Forest Classifier (standard or ordinal) and Extreme Gradient Boosting Classifier algorithms performed best of the 32 tested. Accuracy was lower if UoAs were merged or replaced by Scopus broad categories. We increased accuracy with an active learning strategy and by selecting articles with higher prediction probabilities, as estimated by the algorithms, but this substantially reduced the number of scores predicted.
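The final selection strategy (keep only predictions whose estimated probability clears a threshold, trading coverage for accuracy) can be sketched as follows. This is a hedged illustration, not the study's pipeline: synthetic three-class data stands in for the REF2021 bibliometric inputs, and the 0.6 threshold is an assumed value chosen for the example.

```python
# Sketch of confidence-thresholded prediction with a Random Forest:
# retain only high-probability predictions, reducing coverage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for article features and three-level quality scores.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
pred = proba.argmax(axis=1)          # predicted class per article
conf = proba.max(axis=1)             # estimated probability of that class

overall_acc = (pred == y_te).mean()
keep = conf >= 0.6                   # assumed confidence threshold
selective_acc = (pred[keep] == y_te[keep]).mean()
coverage = keep.mean()               # fraction of articles still scored
print(f"overall={overall_acc:.2f}, selective={selective_acc:.2f}, "
      f"coverage={coverage:.2f}")
```

Raising the threshold typically raises `selective_acc` further while shrinking `coverage`, which mirrors the trade-off the abstract reports.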